71 research outputs found
Exploiting Node Connection Regularity for DHT Replication
This article presents a replication protocol for DHTs.
Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments
The main premise of federated learning (FL) is that machine learning model
updates are computed locally to preserve user data privacy. This approach
prevents, by design, user data from ever leaving the perimeter of their device.
Once the updates are aggregated, the model is broadcast to all nodes in the
federation.
However, without proper defenses, compromised nodes can probe the model inside
their local memory in search of adversarial examples, which can lead to
dangerous real-world scenarios. For instance, in image-based applications,
adversarial examples are images perturbed almost imperceptibly to the human eye
yet misclassified by the local model. These adversarial images are later
presented to a victim node's counterpart model to replay the attack. Typical
examples harness dissemination strategies such as altered traffic signs (patch
attacks) that are no longer recognized by autonomous vehicles, or seemingly
unaltered samples that poison the local dataset of the FL scheme to undermine
its robustness. Pelta is a novel shielding mechanism leveraging Trusted
Execution Environments (TEEs) that reduces the ability of attackers to craft
adversarial samples. Pelta masks inside the TEE the first part of the
back-propagation chain rule, which attackers typically exploit to craft
malicious samples. We evaluate Pelta on accurate state-of-the-art models using
three well-established datasets: CIFAR-10, CIFAR-100 and ImageNet. We show the
effectiveness of Pelta in mitigating six state-of-the-art white-box adversarial
attacks, including Projected Gradient Descent, the Momentum Iterative Method,
Auto Projected Gradient Descent, and the Carlini & Wagner attack. In
particular, to the best of our knowledge, Pelta constitutes the first attempt
at defending an ensemble model against the Self-Attention Gradient attack. Our
code is available to the research community at
https://github.com/queyrusi/Pelta.
Comment: 12 pages, 4 figures, to be published in Proceedings of the 23rd
International Conference on Distributed Computing Systems. arXiv admin note:
substantial text overlap with arXiv:2308.0437
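To make concrete what Pelta hides, consider the simplest white-box attack it defends against. Projected Gradient Descent perturbs an input along the sign of the loss gradient with respect to that input, i.e. the first link of the back-propagation chain rule. The toy linear model, sizes, and names below are illustrative only, not the Pelta codebase:

```python
import numpy as np

# Toy linear softmax classifier attacked with Projected Gradient Descent
# (PGD). The attack needs only the gradient of the loss w.r.t. the INPUT --
# exactly the quantity Pelta masks inside the TEE.

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))          # 2 classes, 4 input features
x = rng.normal(size=4)               # benign sample
y = 0                                # its true label

def loss(W, x, y):
    """Cross-entropy loss of the linear model on (x, y)."""
    z = W @ x
    z = z - z.max()                  # numerical stability
    return float(np.log(np.exp(z).sum()) - z[y])

def input_gradient(W, x, y):
    """dLoss/dx: softmax error propagated back through the linear layer."""
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()
    p[y] -= 1.0                      # dLoss/dlogits
    return W.T @ p                   # chain rule down to the input

def pgd(x, y, steps=10, alpha=0.1, eps=0.5):
    """Iterated sign-gradient ascent on the loss, clipped to an eps-ball."""
    adv = x.copy()
    for _ in range(steps):
        adv = adv + alpha * np.sign(input_gradient(W, adv, y))
        adv = np.clip(adv, x - eps, x + eps)
    return adv

adv = pgd(x, y)                      # adversarial sample with higher loss
```

Blocking the attacker's access to `input_gradient` (by keeping the first layer's backward pass inside the enclave) is what breaks this loop.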
Stress-SGX: Load and Stress your Enclaves for Fun and Profit
The latest generation of Intel processors supports Software Guard Extensions
(SGX), a set of instructions that implements a Trusted Execution Environment
(TEE) right inside the CPU, by means of so-called enclaves. This paper presents
Stress-SGX, an easy-to-use stress-test tool to evaluate the performance of
SGX-enabled nodes. We build on top of the popular Stress-NG tool, while only
keeping the workload injectors (stressors) that are meaningful in the SGX
context. We report on several insights and lessons learned about porting legacy
code to run inside an SGX enclave, as well as the limitations introduced by
this process. Finally, we use Stress-SGX to conduct a study comparing the
performance of different SGX-enabled machines.
Comment: European Commission Project: LEGaTO - Low Energy Toolset for
Heterogeneous Computing (EC-H2020-780681)
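For intuition, a stressor in the Stress-NG sense is a tight loop that saturates one resource while counting operations completed in a fixed time window. A minimal CPU stressor sketch (illustrative only; the real Stress-SGX stressors are C code running inside an enclave):

```python
import time

def cpu_stressor(duration_s=0.2):
    """Busy-loop on a 32-bit linear congruential generator for duration_s
    seconds and return the number of operations completed (a 'bogo-ops'
    style metric, as reported by Stress-NG-like tools)."""
    deadline = time.perf_counter() + duration_s
    acc, ops = 1, 0
    while time.perf_counter() < deadline:
        acc = (acc * 1103515245 + 12345) & 0xFFFFFFFF   # LCG step
        ops += 1
    return ops
```

Comparing the returned counter across machines, or inside versus outside an enclave, gives a first-order throughput comparison of the kind the paper performs.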
Security, Performance and Energy Trade-offs of Hardware-assisted Memory Protection Mechanisms
The deployment of large-scale distributed systems, e.g., publish-subscribe
platforms, that operate over sensitive data using the infrastructure of public
cloud providers, is nowadays heavily hindered by the surging lack of trust
toward the cloud operators. Although purely software-based solutions exist to
protect the confidentiality of data and the processing itself, such as
homomorphic encryption schemes, their performance is far from being practical
under real-world workloads.
This practical experience report describes the performance trade-offs of two
novel hardware-assisted memory protection mechanisms, namely AMD SEV and Intel
SGX, both currently available on the market to tackle this problem.
Specifically, we implement a publish/subscribe use-case and evaluate the
impact of the memory protection mechanisms on the resulting
performance. This paper reports on the experience gained while building this
system, in particular when having to cope with the technical limitations
imposed by SEV and SGX.
By means of micro- and macro-benchmarks, we exhibit several trade-offs that
provide valuable insights in terms of latency, throughput, processing time and
energy requirements.
Comment: European Commission Project: LEGaTO - Low Energy Toolset for
Heterogeneous Computing (EC-H2020-780681)
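The micro-benchmark side of such a study boils down to timing one operation many times and summarizing the latency distribution. A generic sketch (not the authors' harness; the operation and metric names are illustrative):

```python
import time
import statistics

def microbench(op, iters=1000):
    """Run op() iters times; report median and p99 latency plus throughput."""
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return {
        "median_s": statistics.median(samples),
        "p99_s": samples[min(int(0.99 * len(samples)), len(samples) - 1)],
        "throughput_ops_s": iters / sum(samples),
    }
```

Running the same harness with the operation executed under SEV, under SGX, and natively is what surfaces the latency and throughput trade-offs the paper reports.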
FaaSdom: A Benchmark Suite for Serverless Computing
Serverless computing has become a major trend among cloud providers. With
serverless computing, developers fully delegate the task of managing the
servers, dynamically allocating the required resources, as well as handling
availability and fault-tolerance matters to the cloud provider. In doing so,
developers can solely focus on the application logic of their software, which
is then deployed and completely managed in the cloud. Despite its increasing
popularity, not much is known regarding the actual system performance
achievable on the currently available serverless platforms. Specifically, it is
cumbersome to benchmark such systems in a language- or runtime-independent
manner. Instead, one must resort to a full application deployment, to later
take informed decisions on the most convenient solution along several
dimensions, including performance and economic costs. FaaSdom is a modular
architecture and proof-of-concept implementation of a benchmark suite for
serverless computing platforms. It currently supports the mainstream
serverless cloud providers (i.e., AWS, Azure, Google, IBM), a large set of
benchmark tests and a variety of implementation languages. The suite fully
automates the deployment, execution and clean-up of such tests, providing
insights (including historical) on the performance observed by serverless
applications. FaaSdom also integrates a model to estimate budget costs for
deployments across the supported providers. FaaSdom is open-source and
available at https://github.com/bschitter/benchmark-suite-serverless-computing.
Comment: ACM DEBS'2
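A budget-estimation model of the kind FaaSdom integrates can be sketched as follows: typical FaaS pricing charges a fee per request plus a fee per GB-second of execution. The default prices below are illustrative placeholders, not FaaSdom's calibrated per-provider values:

```python
def faas_cost(invocations, avg_ms, mem_mb,
              price_per_million_req=0.20, price_per_gb_s=0.0000166667):
    """Estimate workload cost under a request-fee + GB-seconds pricing
    scheme. Default prices are illustrative, AWS-Lambda-like values."""
    request_cost = invocations / 1_000_000 * price_per_million_req
    gb_seconds = invocations * (avg_ms / 1000.0) * (mem_mb / 1024.0)
    return request_cost + gb_seconds * price_per_gb_s
```

Evaluating this per provider, with each provider's measured latency and price sheet, lets one compare deployments along the cost dimension.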
Block Placement Strategies for Fault-Resilient Distributed Tuple Spaces: An Experimental Study - (Practical Experience Report)
The tuple space abstraction provides an easy-to-use programming paradigm
for distributed applications. Intuitively, it behaves like a distributed shared
memory, where applications write and read entries (tuples). When deployed over
a wide area network, the tuple space needs to efficiently cope with faults of links
and nodes. Erasure coding techniques are increasingly popular to deal with such
catastrophic events, in particular due to their storage efficiency with respect to
replication. When a client writes a tuple into the system, it is first striped into
k blocks and encoded into n > k blocks, in a fault-redundant manner. Then, any
k out of the n blocks are sufficient to reconstruct and read the tuple. This paper
presents several strategies to place those blocks across the set of nodes of a
wide area network, that all together form the tuple space. We present the performance
trade-offs of different placement strategies by means of simulations and a
Python implementation of a distributed tuple space. Our results reveal important
differences in the efficiency of the different strategies, for example in terms of
block fetching latency, and that having some knowledge of the underlying network
graph topology is highly beneficial.
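The k-out-of-n property can be demonstrated with the smallest possible erasure code: k = 2 data blocks plus one XOR parity block (n = 3), where any two of the three blocks rebuild the tuple. Real deployments would use a general code such as Reed-Solomon; the sketch below is only illustrative:

```python
def encode(data: bytes):
    """Stripe data into k = 2 blocks and add an XOR parity block (n = 3)."""
    if len(data) % 2:
        data += b"\x00"        # pad; a real system would also store the length
    half = len(data) // 2
    b0, b1 = data[:half], data[half:]
    parity = bytes(a ^ b for a, b in zip(b0, b1))
    return [b0, b1, parity]

def decode(blocks):
    """Rebuild the tuple from any 2 of the 3 blocks (one entry may be None)."""
    b0, b1, parity = blocks
    if b0 is None:
        b0 = bytes(a ^ b for a, b in zip(b1, parity))   # b0 = b1 XOR parity
    elif b1 is None:
        b1 = bytes(a ^ b for a, b in zip(b0, parity))   # b1 = b0 XOR parity
    return b0 + b1
```

Placement then decides which nodes of the wide area network store each of the three blocks, which is exactly where topology-aware strategies reduce block fetching latency.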